{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
    "### Test 2: Machine learning\n",
    "\n",
    "In this test we will use the entire dataset from the Walmart Kaggle challenge, do some feature engineering and data munging, then fit a random forest model to our data.\n",
    "\n",
    "Again, the data is a CSV file with one line for each scan on their system, containing a Upc, Weekday, ScanCount, DepartmentDescription and FinelineNumber.\n",
    "\n",
    "The VisitNumber column groups our data into baskets - every unique VisitNumber is a unique basket, and a basket may contain multiple scans.\n",
    "\n",
    "The label is the TripType column, which is Walmart's proprietary way of clustering their visits into categories. We wish to match their algorithm, and predict the category of some held-out data.\n",
    "\n",
    "This time we will use the full dataset - about 650,000 lines in about 100,000 baskets. As a heads-up, using 100 estimators, my answer to the test takes less than 3 minutes to run - no need for hours and hours of computation.\n",
    "\n",
    "If you do need to run this script multiple times, save a local copy of the dataset rather than redownloading it each time, as it's around 30 MB.\n",
    "\n",
    "Please answer each question in the cell below it - feel free to answer out of order, but leave comments saying where you carried out the answer. I am working more or less step by step through my answer - feel free to add extra predictors if you can think of them."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"1\\. Import the modules you will use for the rest of the test:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"import pandas as pd\n",
"import numpy as np\n",
"from sklearn import ensemble\n",
    "from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer scikit-learn\n",
"import operator"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"2\\. Read in the data, and check its head. The data is available on the website at: http://jeremy.kiwi.nz/pythoncourse/assets/tests/test2data.csv"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
" TripType VisitNumber Weekday Upc ScanCount \\\n",
"0 999 5 Friday 6.811315e+10 -1 \n",
"1 30 7 Friday 6.053882e+10 1 \n",
"2 30 7 Friday 7.410811e+09 1 \n",
"3 26 8 Friday 2.238404e+09 2 \n",
"4 26 8 Friday 2.006614e+09 2 \n",
"\n",
" DepartmentDescription FinelineNumber \n",
"0 FINANCIAL SERVICES 1000.0 \n",
"1 SHOES 8931.0 \n",
"2 PERSONAL CARE 4504.0 \n",
"3 PAINT AND ACCESSORIES 3565.0 \n",
"4 PAINT AND ACCESSORIES 1017.0 "
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
    "dat = pd.read_csv(\"http://jeremy.kiwi.nz/pythoncourse/assets/tests/test2data.csv\")  # or a saved local copy\n",
"dat.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
    "3\\. Convert the Weekday and DepartmentDescription columns into dummy variables. For now they can be separate dataframes."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
" 1-HR PHOTO ACCESSORIES AUTOMOTIVE BAKERY BATH AND SHOWER BEAUTY \\\n",
"0 0.0 0.0 0.0 0.0 0.0 0.0 \n",
"1 0.0 0.0 0.0 0.0 0.0 0.0 \n",
"2 0.0 0.0 0.0 0.0 0.0 0.0 \n",
"3 0.0 0.0 0.0 0.0 0.0 0.0 \n",
"4 0.0 0.0 0.0 0.0 0.0 0.0 \n",
"\n",
" BEDDING BOOKS AND MAGAZINES BOYS WEAR BRAS & SHAPEWEAR ... \\\n",
"0 0.0 0.0 0.0 0.0 ... \n",
"1 0.0 0.0 0.0 0.0 ... \n",
"2 0.0 0.0 0.0 0.0 ... \n",
"3 0.0 0.0 0.0 0.0 ... \n",
"4 0.0 0.0 0.0 0.0 ... \n",
"\n",
" SEAFOOD SEASONAL SERVICE DELI SHEER HOSIERY SHOES \\\n",
"0 0.0 0.0 0.0 0.0 0.0 \n",
"1 0.0 0.0 0.0 0.0 1.0 \n",
"2 0.0 0.0 0.0 0.0 0.0 \n",
"3 0.0 0.0 0.0 0.0 0.0 \n",
"4 0.0 0.0 0.0 0.0 0.0 \n",
"\n",
" SLEEPWEAR/FOUNDATIONS SPORTING GOODS SWIMWEAR/OUTERWEAR TOYS WIRELESS \n",
"0 0.0 0.0 0.0 0.0 0.0 \n",
"1 0.0 0.0 0.0 0.0 0.0 \n",
"2 0.0 0.0 0.0 0.0 0.0 \n",
"3 0.0 0.0 0.0 0.0 0.0 \n",
"4 0.0 0.0 0.0 0.0 0.0 \n",
"\n",
"[5 rows x 68 columns]"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
    "# dummify the categorical variables\n",
    "weekdum = pd.get_dummies(dat['Weekday'])\n",
    "departdum = pd.get_dummies(dat['DepartmentDescription'])\n",
    "departdum.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
    "4\\. Drop the unneeded columns from the raw data - I suggest removing 'Weekday', 'Upc', 'DepartmentDescription' and 'FinelineNumber' (we could dummify Upc and FinelineNumber, but this would massively increase our data size)."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
" TripType VisitNumber ScanCount\n",
"0 999 5 -1\n",
"1 30 7 1\n",
"2 30 7 1\n",
"3 26 8 2\n",
"4 26 8 2"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"#drop the useless columns:\n",
"dat = dat.drop(['Weekday', 'Upc', 'DepartmentDescription', 'FinelineNumber'], axis = 1)\n",
"dat.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
    "5\\. Correct the dummified data for the number of items bought in each scan, using ScanCount. I would recommend something like:\n",
    "\n",
    "`departdum.multiply(dat['ScanCount'], axis = 0)`"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"#correct for scancount\n",
"departdum = departdum.multiply(dat['ScanCount'], axis = 0)\n",
"departdum['ScanCount'] = dat['ScanCount']\n",
"dat = dat.drop(['ScanCount'], axis = 1)"
]
},
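    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "To see what this multiplication does on a couple of rows - a toy example, assuming a returned item shows up as a negative ScanCount:\n",
      "\n",
      "```python\n",
      "import pandas as pd\n",
      "\n",
      "# hypothetical two-scan basket: 2 pairs of shoes bought, 1 toy returned\n",
      "toy = pd.DataFrame({'DepartmentDescription': ['SHOES', 'TOYS'],\n",
      "                    'ScanCount': [2, -1]})\n",
      "\n",
      "# each dummy column now carries the signed item count, not just 0/1\n",
      "dums = pd.get_dummies(toy['DepartmentDescription']).multiply(toy['ScanCount'], axis=0)\n",
      "print(dums)\n",
      "```"
     ]
    },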
{
"cell_type": "markdown",
"metadata": {},
"source": [
"6\\. Concatenate back together the dummy variables with the main dataframe"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
" TripType VisitNumber Friday Monday Saturday Sunday Thursday Tuesday \\\n",
"0 999 5 1.0 0.0 0.0 0.0 0.0 0.0 \n",
"1 30 7 1.0 0.0 0.0 0.0 0.0 0.0 \n",
"2 30 7 1.0 0.0 0.0 0.0 0.0 0.0 \n",
"3 26 8 1.0 0.0 0.0 0.0 0.0 0.0 \n",
"4 26 8 1.0 0.0 0.0 0.0 0.0 0.0 \n",
"\n",
" Wednesday 1-HR PHOTO ... SEASONAL SERVICE DELI SHEER HOSIERY \\\n",
"0 0.0 -0.0 ... -0.0 -0.0 -0.0 \n",
"1 0.0 0.0 ... 0.0 0.0 0.0 \n",
"2 0.0 0.0 ... 0.0 0.0 0.0 \n",
"3 0.0 0.0 ... 0.0 0.0 0.0 \n",
"4 0.0 0.0 ... 0.0 0.0 0.0 \n",
"\n",
" SHOES SLEEPWEAR/FOUNDATIONS SPORTING GOODS SWIMWEAR/OUTERWEAR TOYS \\\n",
"0 -0.0 -0.0 -0.0 -0.0 -0.0 \n",
"1 1.0 0.0 0.0 0.0 0.0 \n",
"2 0.0 0.0 0.0 0.0 0.0 \n",
"3 0.0 0.0 0.0 0.0 0.0 \n",
"4 0.0 0.0 0.0 0.0 0.0 \n",
"\n",
" WIRELESS ScanCount \n",
"0 -0.0 -1 \n",
"1 0.0 1 \n",
"2 0.0 1 \n",
"3 0.0 2 \n",
"4 0.0 2 \n",
"\n",
"[5 rows x 78 columns]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"dat = pd.concat([dat, weekdum, departdum], axis = 1)\n",
"dat.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
    "7\\. Summarise the data for each basket (hint: if you group by columns, the .agg() method will not apply to them)."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
" Friday Monday Saturday Sunday Thursday Tuesday \\\n",
"TripType VisitNumber \n",
"3 106 2.0 0.0 0.0 0.0 0.0 0.0 \n",
" 121 2.0 0.0 0.0 0.0 0.0 0.0 \n",
" 153 2.0 0.0 0.0 0.0 0.0 0.0 \n",
" 162 2.0 0.0 0.0 0.0 0.0 0.0 \n",
" 164 2.0 0.0 0.0 0.0 0.0 0.0 \n",
"\n",
" Wednesday 1-HR PHOTO ACCESSORIES AUTOMOTIVE \\\n",
"TripType VisitNumber \n",
"3 106 0.0 0.0 0.0 0.0 \n",
" 121 0.0 0.0 0.0 0.0 \n",
" 153 0.0 0.0 0.0 0.0 \n",
" 162 0.0 0.0 0.0 0.0 \n",
" 164 0.0 0.0 0.0 0.0 \n",
"\n",
" ... SEASONAL SERVICE DELI SHEER HOSIERY SHOES \\\n",
"TripType VisitNumber ... \n",
"3 106 ... 0.0 0.0 0.0 0.0 \n",
" 121 ... 0.0 0.0 0.0 0.0 \n",
" 153 ... 0.0 0.0 0.0 0.0 \n",
" 162 ... 0.0 0.0 0.0 0.0 \n",
" 164 ... 0.0 0.0 0.0 0.0 \n",
"\n",
" SLEEPWEAR/FOUNDATIONS SPORTING GOODS \\\n",
"TripType VisitNumber \n",
"3 106 0.0 0.0 \n",
" 121 0.0 0.0 \n",
" 153 0.0 0.0 \n",
" 162 0.0 0.0 \n",
" 164 0.0 0.0 \n",
"\n",
" SWIMWEAR/OUTERWEAR TOYS WIRELESS ScanCount \n",
"TripType VisitNumber \n",
"3 106 0.0 0.0 0.0 2 \n",
" 121 0.0 0.0 0.0 2 \n",
" 153 0.0 0.0 0.0 2 \n",
" 162 0.0 0.0 0.0 2 \n",
" 164 0.0 0.0 0.0 2 \n",
"\n",
"[5 rows x 76 columns]"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
    "dat1 = dat.groupby(['TripType', 'VisitNumber']).agg('sum')\n",
"dat1.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"8\\. Use the reset_index() method to remove your groupings. As we did not cover multiple indices in the lesson, my answer was \n",
"\n",
"`dat1 = dat1.reset_index()`"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"dat1 = dat1.reset_index()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"9\\. Split the data into training and testing sets: Use 0.25 of the data in the test set."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"classes = dat1.TripType\n",
"dat1 = dat1.drop('TripType', axis = 1)\n",
"classes.head()\n",
"\n",
"X_train, X_test, y_train, y_test = \\\n",
" train_test_split(dat1, classes, test_size = 0.25, random_state = 0)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"10\\. Plot the training data using matplotlib or seaborn. Choose at least 3 meaningful plots to present aspects of the data."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"#lots of good answers here"
]
},
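    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "A minimal sketch of three plots that would qualify, using a hypothetical toy frame in place of the real training data (on the real data, swap in the summarised baskets from `X_train`/`y_train`):\n",
      "\n",
      "```python\n",
      "import pandas as pd\n",
      "import matplotlib.pyplot as plt\n",
      "\n",
      "# hypothetical toy data standing in for the summarised baskets\n",
      "toy = pd.DataFrame({'TripType': [999, 30, 30, 26, 26, 26],\n",
      "                    'ScanCount': [1, 1, 2, 2, 3, 5],\n",
      "                    'Friday': [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]})\n",
      "\n",
      "fig, axes = plt.subplots(1, 3, figsize=(12, 3))\n",
      "toy['TripType'].value_counts().plot(kind='bar', ax=axes[0], title='Baskets per TripType')\n",
      "toy['ScanCount'].plot(kind='hist', ax=axes[1], title='Items per basket')\n",
      "toy.groupby('TripType')['Friday'].mean().plot(kind='bar', ax=axes[2], title='Share of Friday baskets')\n",
      "plt.tight_layout()\n",
      "```"
     ]
    },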
{
"cell_type": "markdown",
"metadata": {},
"source": [
"11\\. Take out the TripType from our dataframe - we don't want our label as a feature. \n",
"\n",
"Make sure to save it somewhere though, as our model needs to be fit to these labels."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"#see part 9"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
    "12\\. Define and fit a random forest classifier (`ensemble.RandomForestClassifier`) with 100 `n_estimators`. "
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',\n",
" max_depth=None, max_features='auto', max_leaf_nodes=None,\n",
" min_samples_leaf=1, min_samples_split=2,\n",
" min_weight_fraction_leaf=0.0, n_estimators=100, n_jobs=1,\n",
" oob_score=False, random_state=None, verbose=0,\n",
" warm_start=False)"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model = ensemble.RandomForestClassifier(n_estimators=100)\n",
"model.fit(X_train, y_train)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"13\\. What is the score of the model on the training data?"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"0.99994425475576609"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model.score(X_train, y_train)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"14\\. What is the score of the model on the testing data?"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"0.65140683138927213"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model.score(X_test, y_test)"
]
},
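    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "The gap between the training score and this test score suggests overfitting. To see which classes drive the errors, a confusion matrix or classification report helps - a minimal sketch on hypothetical synthetic data (on the real data, pass `y_test` and `model.predict(X_test)`):\n",
      "\n",
      "```python\n",
      "import numpy as np\n",
      "from sklearn.ensemble import RandomForestClassifier\n",
      "from sklearn.metrics import classification_report, confusion_matrix\n",
      "\n",
      "# synthetic stand-ins for X_test / y_test, three classes\n",
      "rng = np.random.RandomState(0)\n",
      "X = rng.rand(200, 5)\n",
      "y = rng.randint(0, 3, 200)\n",
      "\n",
      "clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)\n",
      "cm = confusion_matrix(y, clf.predict(X))  # rows: true class, columns: predicted class\n",
      "print(classification_report(y, clf.predict(X)))\n",
      "```"
     ]
    },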
{
"cell_type": "markdown",
"metadata": {},
"source": [
"15\\. What is the most important variable? Can you explain the model?"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Feature ScanCount was the most important, with an importance value of 0.16855133760881952\n"
]
}
],
"source": [
"importances = model.feature_importances_\n",
"max_index, max_value = max(enumerate(importances), key=operator.itemgetter(1))\n",
"print('Feature {x} was the most important, with an importance value of {y}'.format(x = dat1.columns[max_index], y = max_value))"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
    "random forests are notoriously difficult to interpret - any explanation here was fine\n"
]
}
],
"source": [
    "print('random forests are notoriously difficult to interpret - any explanation here was fine')"
]
},
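    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "Beyond the single top feature, ranking all the importances gives a rough picture of what the forest relies on. A minimal sketch on hypothetical synthetic data (on the real model, build the Series from `model.feature_importances_` and `dat1.columns`):\n",
      "\n",
      "```python\n",
      "import numpy as np\n",
      "import pandas as pd\n",
      "from sklearn.ensemble import RandomForestClassifier\n",
      "\n",
      "# synthetic frame whose label depends only on ScanCount\n",
      "rng = np.random.RandomState(0)\n",
      "X = pd.DataFrame(rng.rand(100, 4), columns=['ScanCount', 'Friday', 'SHOES', 'TOYS'])\n",
      "y = (X['ScanCount'] > 0.5).astype(int)\n",
      "\n",
      "rf = RandomForestClassifier(n_estimators=20, random_state=0).fit(X, y)\n",
      "# importances sum to 1; sorting them ranks the features\n",
      "ranked = pd.Series(rf.feature_importances_, index=X.columns).sort_values(ascending=False)\n",
      "print(ranked)  # ScanCount should dominate\n",
      "```"
     ]
    },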
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Thanks for taking the Python Course!\n",
"\n",
"Please save your notebook file as 'your name - test2.ipynb', and email it to jeremycgray+pythoncourse@gmail.com by the 29th of April."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.1"
}
},
"nbformat": 4,
"nbformat_minor": 0
}